Text classification of unseen classes is a challenging Natural Language Processing task and is mainly attempted using two different types of approaches. Similarity-based approaches attempt to classify instances based on similarities between text document representations and class description representations. Zero-shot text classification approaches aim to generalize knowledge gained from a training task by assigning appropriate labels of unknown classes to text documents. Although existing studies have already investigated individual approaches from these categories, the experiments in the literature do not provide a consistent comparison. This paper addresses this gap by conducting a systematic evaluation of different similarity-based and zero-shot approaches for text classification of unseen classes. Different state-of-the-art approaches are benchmarked on four text classification datasets, including a new dataset from the medical domain. Additionally, novel SimCSE- and SBERT-based baselines are proposed, since the baselines used in existing work yield weak classification results and are easily outperformed. Finally, the novel similarity-based Lbl2TransformerVec approach is presented, which outperforms previous state-of-the-art approaches in unsupervised text classification. Our experiments show that similarity-based approaches significantly outperform zero-shot approaches in most cases. Additionally, using SimCSE or SBERT embeddings instead of simpler text representations increases similarity-based classification results even further.
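To make the similarity-based setup concrete, the following is a minimal sketch of an SBERT-style baseline in the spirit of those compared above, assuming the sentence-transformers library; the model name, label descriptions and example document are illustrative placeholders, and this is not the paper's Lbl2TransformerVec implementation. Each document is assigned the class whose textual description is closest in embedding space.

    from sentence_transformers import SentenceTransformer, util

    # Illustrative model and class descriptions (placeholders, not from the paper).
    model = SentenceTransformer("all-MiniLM-L6-v2")
    class_descriptions = {
        "sports": "articles about sports, athletes and competitions",
        "politics": "articles about governments, elections and policy",
    }
    documents = ["The striker scored twice in the final minutes."]

    label_names = list(class_descriptions)
    label_emb = model.encode(list(class_descriptions.values()), normalize_embeddings=True)
    doc_emb = model.encode(documents, normalize_embeddings=True)

    # Cosine similarity between every document and every class description.
    scores = util.cos_sim(doc_emb, label_emb)              # shape: (n_docs, n_classes)
    predictions = [label_names[i] for i in scores.argmax(dim=1).tolist()]
    print(predictions)                                     # e.g. ['sports']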
This paper presents an approach for learning the Cartesian velocities of objects from automotive radar data using an object detection network. The proposed method is self-supervised in the sense that it generates its own training signal for the velocities. Labels are required only for single-frame oriented bounding boxes (OBBs); no expensive Cartesian velocity labels or labels on consecutive sequences are needed. The general idea is to first pre-train the object detection network using only the single-frame OBB labels, and then to exploit the network's OBB predictions on unlabeled data for velocity training. In detail, the predicted velocities are used to update the network's OBB predictions on unlabeled frames to the timestamps of labeled frames, and the distances between these updated OBBs and the OBB predictions on the labeled frames are used to generate a self-supervised training signal for the velocities. The detection network architecture is extended by a module that accounts for the temporal relations of multiple scans and a module that explicitly represents the radar's radial velocity measurements. A two-step approach is used that first trains only the OBB detection and then trains OBB detection and velocity estimation together. In addition, pre-training with pseudo-labels generated from the radar's radial velocity measurements bootstraps the self-supervised method of this paper. Experiments on the publicly available nuScenes dataset show that the proposed method nearly reaches the velocity estimation performance of fully supervised training, yet does not require expensive velocity labels. Moreover, it outperforms a baseline method that uses only the radial velocity measurements as labels.
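As a rough illustration of the kind of training signal described above, the following PyTorch sketch propagates predicted OBB centres from an unlabeled frame to the timestamp of a labeled frame with a constant-velocity model and penalises the distance to the matched OBBs there. The function name, tensor shapes and the smooth-L1 penalty are assumptions for illustration, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def velocity_self_supervision_loss(centers_unlabeled,  # (N, 2) predicted OBB centres at time t
                                       velocities_pred,     # (N, 2) predicted Cartesian velocities
                                       centers_reference,   # (N, 2) matched OBBs at reference time t_ref
                                       dt):                 # scalar: t_ref - t in seconds
        # Constant-velocity propagation of the unlabeled-frame OBB centres to t_ref.
        centers_propagated = centers_unlabeled + velocities_pred * dt
        # The residual distance acts as a self-supervised signal for the velocities.
        return F.smooth_l1_loss(centers_propagated, centers_reference)

    # Dummy example: one object moving at roughly 5 m/s in x over dt = 0.1 s.
    loss = velocity_self_supervision_loss(
        torch.tensor([[10.0, 2.0]]), torch.tensor([[5.0, 0.0]]),
        torch.tensor([[10.5, 2.0]]), dt=0.1)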
This paper presents novel hybrid architectures that combine grid-based and point-based processing to improve the detection performance and orientation estimation of radar-based object detection networks. Purely grid-based detection models operate on a bird's-eye-view (BEV) projection of the input point cloud. These approaches suffer a loss of detailed information through the discretized grid resolution. This applies in particular to radar object detection, where relatively coarse grid resolutions are commonly used to account for the sparsity of radar point clouds. In contrast, point-based models are not affected by this problem, as they process the point cloud without discretization. However, they generally show worse detection performance than grid-based methods. We show that a point-based model can extract neighborhood features, leveraging the exact relative positions of the points, before the grid rendering step. This provides significant benefits for the subsequent grid-based convolutional detection backbone. In experiments on the public nuScenes dataset, our hybrid architecture achieves improvements in detection performance (the car-class mAP is 19.7% higher than the next-best radar-only submission) and in orientation estimation (an 11.5% relative improvement in orientation) compared to networks from the previous literature.
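The PyTorch sketch below illustrates the hybrid idea under simplifying assumptions: per-point features are first computed from exact relative neighbour positions, and only then rasterised onto a BEV grid that a convolutional backbone could consume. The MLP size, neighbourhood size k, grid extent and cell-wise max aggregation are illustrative choices, not the architecture from the paper.

    import torch, torch.nn as nn

    class PointNeighborhoodEncoder(nn.Module):
        def __init__(self, k=8, out_dim=32):
            super().__init__()
            self.k = k
            self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out_dim))

        def forward(self, xyz):                        # xyz: (N, 3) radar points
            d = torch.cdist(xyz, xyz)                  # (N, N) pairwise distances
            idx = d.topk(self.k + 1, largest=False).indices[:, 1:]   # k nearest neighbours
            rel = xyz[idx] - xyz[:, None, :]           # (N, k, 3) exact relative positions
            return self.mlp(rel).max(dim=1).values     # (N, out_dim) per-point features

    def scatter_to_bev(xyz, feats, grid=64, extent=50.0):
        """Rasterise per-point features onto a BEV grid by cell-wise max."""
        bev = torch.zeros(feats.shape[1], grid, grid)
        ij = ((xyz[:, :2] + extent) / (2 * extent) * grid).long().clamp(0, grid - 1)
        for (i, j), f in zip(ij.tolist(), feats):
            bev[:, i, j] = torch.maximum(bev[:, i, j], f)
        return bev                                     # input to a grid-based CNN backbone

    points = torch.randn(200, 3) * 10                  # dummy radar point cloud
    bev_map = scatter_to_bev(points, PointNeighborhoodEncoder()(points))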
Abnormal airway dilatation, termed traction bronchiectasis, is a typical feature of idiopathic pulmonary fibrosis (IPF). Volumetric computed tomography (CT) imaging captures the loss of normal airway tapering in IPF. We postulated that automated quantification of airway abnormalities could provide estimates of IPF disease extent and severity. We propose an automated computational pipeline that systematically parcellates the airway tree into its lobes and generational branches from a deep-learning-based airway segmentation, deriving airway structural measures from chest CT. Importantly, the pipeline prevents the occurrence of spurious airway branches by thick wave propagation and removes loops in the airway tree by graph search, overcoming limitations of existing airway skeletonisation algorithms. Tapering between airway segments (intertapering) and airway tortuosity were compared between 14 healthy participants and 14 IPF patients. Airway intertapering was significantly decreased and airway tortuosity significantly increased in IPF patients compared with healthy controls. The differences were most marked in the lower lobes, consistent with the typical distribution of IPF-related damage. The proposed pipeline is open source, avoids the limitations of existing airway quantification algorithms, and is clinically interpretable. Automated airway measurements may have potential as novel imaging biomarkers of IPF severity and disease extent.
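For illustration, here is a small numpy sketch of the two measures compared above, under assumed definitions: tortuosity as centreline arc length divided by the end-to-end chord length of a branch, and intertapering as the relative drop in mean diameter from a parent segment to its child. These formulas are illustrative; the pipeline's exact definitions may differ.

    import numpy as np

    def tortuosity(centreline):
        """centreline: (M, 3) ordered points along one airway branch."""
        arc_length = np.linalg.norm(np.diff(centreline, axis=0), axis=1).sum()
        chord = np.linalg.norm(centreline[-1] - centreline[0])
        return arc_length / chord

    def intertapering(parent_diameters, child_diameters):
        """Relative decrease in mean lumen diameter between consecutive segments."""
        p, c = np.mean(parent_diameters), np.mean(child_diameters)
        return (p - c) / p

    branch = np.array([[0, 0, 0], [1, 0.2, 0], [2, 0.1, 0.3], [3, 0, 0.5]], float)
    print(tortuosity(branch), intertapering([4.1, 4.0, 3.9], [3.2, 3.1, 3.0]))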
A notable shortcoming of machine learning is the limited ability of models to solve new problems quickly without forgetting previously acquired knowledge. To better understand this issue, continual learning has emerged to systematically investigate learning protocols in which a model sequentially observes samples generated by a series of tasks. First, we propose an optimality principle that facilitates a trade-off between learning and forgetting. We derive this principle from an information-theoretic formulation of bounded rationality and show its connections to other continual learning methods. Second, based on this principle, we propose a neural network layer for continual learning, called the Mixture-of-Variational-Experts (MoVE) layer, which alleviates forgetting while enabling the beneficial transfer of knowledge to new tasks. Our experiments on variants of the MNIST and CIFAR10 datasets demonstrate the competitive performance of MoVE layers compared with state-of-the-art approaches.
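To illustrate the general structure such a layer could take, below is a plain (non-variational) gated mixture-of-experts layer in PyTorch; the variational treatment of the experts and the bounded-rationality objective described above are deliberately omitted, and all layer sizes are arbitrary assumptions, not the paper's design.

    import torch, torch.nn as nn

    class MixtureOfExpertsLayer(nn.Module):
        def __init__(self, in_dim, out_dim, n_experts=4):
            super().__init__()
            self.experts = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(n_experts)])
            self.gate = nn.Linear(in_dim, n_experts)

        def forward(self, x):
            weights = torch.softmax(self.gate(x), dim=-1)                 # (B, n_experts)
            outputs = torch.stack([e(x) for e in self.experts], dim=-1)   # (B, out_dim, n_experts)
            return (outputs * weights.unsqueeze(1)).sum(dim=-1)           # gated combination

    layer = MixtureOfExpertsLayer(784, 128)
    y = layer(torch.randn(32, 784))   # e.g. a batch of flattened MNIST images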
The performance of inertial navigation systems is largely dependent on a stable flow of external measurements and information to guarantee continuous filter updates and to bound the drift of the inertial solution. Platforms in different operational environments may at some point be prevented from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information to update the navigation filter. This paper aims to provide an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described through the notable works that implemented its concept, their use cases, the relevant state updates, and the corresponding measurement models. By matching the appropriate constraint to a given scenario, one can improve the accuracy of the navigation solution, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
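As one classic instance of the aiding idea surveyed here, the numpy sketch below implements a zero-velocity update (ZUPT): the knowledge that the platform is momentarily stationary is turned into a pseudo-measurement z = 0 on the velocity states of the navigation filter. The 9-state ordering, covariance values and measurement noise are illustrative assumptions.

    import numpy as np

    def zero_velocity_update(x, P, R=np.eye(3) * 1e-4):
        """Standard Kalman measurement update with H selecting the velocity states.

        x: (9,) error state, assumed ordered as [position(3), velocity(3), attitude(3)]
        P: (9, 9) state covariance
        """
        H = np.zeros((3, 9))
        H[:, 3:6] = np.eye(3)                 # measurement = velocity states
        z = np.zeros(3)                       # stationary platform => zero velocity
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ (z - H @ x)
        P_new = (np.eye(9) - K @ H) @ P
        return x_new, P_new

    x, P = np.zeros(9), np.eye(9) * 0.1
    x[3:6] = [0.05, -0.02, 0.01]              # small spurious velocity drift
    x_upd, P_upd = zero_velocity_update(x, P)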
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
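The following is a toy tabular numpy sketch of the frozen-state idea, under strong simplifying assumptions: an uncontrolled slow state, small random MDPs, and an upper-level step that corresponds to T frozen fast steps followed by one slow transition. It is meant only to illustrate the two-level structure, not the paper's algorithm, its function-approximation variant, or its regret guarantees.

    import numpy as np

    rng = np.random.default_rng(0)
    nS, nF, nA, T, gamma = 3, 4, 2, 5, 0.95              # slow states, fast states, actions, horizon

    P_fast = rng.dirichlet(np.ones(nF), size=(nA, nF))   # P_fast[a, f, f']
    P_slow = rng.dirichlet(np.ones(nS), size=nS)         # P_slow[s, s'] (slower timescale)
    R = rng.random((nS, nA, nF))                         # reward depends on the frozen slow state

    def solve_lower_level(s):
        """T-horizon backward induction over fast states with the slow state frozen at s.
        Returns the optimal T-step value per initial fast state and the fast-state
        distribution after T steps under the greedy (time-dependent) policy."""
        V, kernels = np.zeros(nF), []
        for _ in range(T):
            Q = R[s] + gamma * P_fast @ V                # Q[a, f]
            greedy = Q.argmax(axis=0)                    # best action per fast state
            V = Q.max(axis=0)
            kernels.append(P_fast[greedy, np.arange(nF), :])
        D = np.eye(nF)
        for K in kernels[::-1]:                          # compose kernels in forward-time order
            D = D @ K
        return V, D

    # Lower level: solve each frozen-slow-state MDP once.
    V_low, D_fast = np.zeros((nS, nF)), np.zeros((nS, nF, nF))
    for s in range(nS):
        V_low[s], D_fast[s] = solve_lower_level(s)

    # Upper level: value iteration on the slower timescale, where one "step" is
    # T fast steps with the slow state frozen, followed by one slow transition.
    V = np.zeros((nS, nF))
    for _ in range(500):
        cont = np.einsum("sft,su,ut->sf", D_fast, P_slow, V)   # E[V(s', f_T)]
        V = V_low + (gamma ** T) * cont
    print(V.round(2))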
In the present work we propose an unsupervised ensemble method consisting of oblique trees that addresses the task of auto-encoding, namely the Oblique Forest AutoEncoder (OF-AE for short). Our method is a natural extension of the eForest encoder introduced in [1]. More precisely, by employing oblique splits defined by multivariate linear combinations of features instead of axis-parallel ones, we devise an auto-encoder method based on computing a sparse solution of a set of linear inequalities given by feature-value constraints. The code for reproducing our results is available at https://github.com/CDAlecsa/Oblique-Forest-AutoEncoders.
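A minimal sketch of the decoding step described above, under the assumption that an encoded sample is represented by the oblique-split inequalities W x <= b collected along its root-to-leaf paths; a sparse reconstruction is then recovered by L1-minimisation over that system using scipy's linprog. The constraint data are made up for illustration, and this is not the OF-AE code.

    import numpy as np
    from scipy.optimize import linprog

    def decode(W, b):
        """Recover a sparse x with W @ x <= b via the split x = u - v, minimising ||x||_1."""
        d = W.shape[1]
        c = np.ones(2 * d)                            # minimise sum(u) + sum(v) = ||x||_1
        A_ub = np.hstack([W, -W])                     # W @ (u - v) <= b
        res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(0, None)] * (2 * d))
        u, v = res.x[:d], res.x[d:]
        return u - v

    # Two hypothetical oblique splits: x0 + 2*x1 >= 1 (stored negated) and x0 - x1 <= 0.5.
    W = np.array([[-1.0, -2.0], [1.0, -1.0]])
    b = np.array([-1.0, 0.5])
    print(decode(W, b))                               # sparse feasible point, approx. [0, 0.5]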
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to learn both a representation for what matters in the task -- the task "features" -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation and what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data-augmentation heuristics. In contrast, in order to learn the representations that people use, and thereby their preferences and objectives, we rely on their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
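As a concrete illustration of the parallel to contrastive learning drawn above, the sketch below trains an encoder from user similarity answers with a standard contrastive loss, pulling user-similar behavior pairs together and pushing user-different pairs apart. The encoder, the flattened trajectory features and the margin are illustrative assumptions rather than the paper's setup.

    import torch, torch.nn as nn, torch.nn.functional as F

    encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 8))

    def similarity_query_loss(traj_a, traj_b, same, margin=1.0):
        """same[i] = 1 if the user judged the i-th pair of trajectories as similar."""
        d = F.pairwise_distance(encoder(traj_a), encoder(traj_b))
        # Pull user-similar pairs together, push user-different pairs past a margin.
        return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

    # Dummy batch: flattened length-20 trajectory features and binary user answers.
    traj_a, traj_b = torch.randn(16, 20), torch.randn(16, 20)
    answers = torch.randint(0, 2, (16,)).float()
    loss = similarity_query_loss(traj_a, traj_b, answers)
    loss.backward()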
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.